Reduced order modeling methods are often used as a means to reduce simulation costs in industrial applications. Despite their computational advantages, reduced order models (ROMs) often fail to accurately reproduce the complex dynamics encountered in real-life applications. To address this challenge, we leverage NeuralODEs to propose a novel ROM correction approach based on a time-continuous memory formulation. Experimental results show that our proposed method provides a high level of accuracy while retaining the low computational costs inherent to reduced models.
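As a rough illustration of the idea, the sketch below couples a known reduced-order right-hand side with a learned correction term driven by an auxiliary, continuously evolving memory state. The architecture, layer sizes, and names are assumptions made for illustration only and do not reproduce the authors' implementation.

    import torch
    import torch.nn as nn

    class CorrectedROM(nn.Module):
        # Reduced state a(t) evolves under the known ROM right-hand side plus a learned
        # correction; an auxiliary memory state m(t) evolves continuously alongside it.
        def __init__(self, rom_rhs, n_modes=10, n_memory=16):
            super().__init__()
            self.rom_rhs = rom_rhs
            self.n_modes, self.n_memory = n_modes, n_memory
            self.correction = nn.Sequential(
                nn.Linear(n_modes + n_memory, 64), nn.Tanh(), nn.Linear(64, n_modes))
            self.memory_rhs = nn.Sequential(
                nn.Linear(n_modes + n_memory, 64), nn.Tanh(), nn.Linear(64, n_memory))

        def forward(self, t, state):
            a, m = state[..., :self.n_modes], state[..., self.n_modes:]
            am = torch.cat([a, m], dim=-1)
            da = self.rom_rhs(a) + self.correction(am)   # corrected reduced dynamics
            dm = self.memory_rhs(am)                     # time-continuous memory
            return torch.cat([da, dm], dim=-1)

    # Explicit Euler rollout for illustration; an adaptive ODE solver such as
    # torchdiffeq.odeint could integrate the same right-hand side.
    def rollout(model, a0, t_grid):
        m0 = torch.zeros(a0.shape[:-1] + (model.n_memory,))
        state = torch.cat([a0, m0], dim=-1)
        states = [state]
        for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
            state = state + (t1 - t0) * model(t0, state)
            states.append(state)
        return torch.stack(states)[..., :model.n_modes]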
The Volume of Fluid (VOF) method is widely used in multiphase flow simulations to track and locate the interface between two immiscible fluids. The main bottleneck of the VOF method is the interface reconstruction step, owing to its high computational cost and low accuracy on unstructured meshes. We propose a machine-learning-enhanced VOF method based on Graph Neural Networks (GNNs) to accelerate interface reconstruction on general unstructured meshes. We first develop a methodology to generate a synthetic dataset based on paraboloid surfaces discretized on unstructured meshes. We then train a GNN-based model and perform generalization tests. Our results demonstrate the efficiency of GNN-based interface reconstruction in an industrial context.
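As a rough sketch of the kind of model involved, the snippet below applies a few rounds of message passing over the mesh connectivity graph (cells as nodes, shared faces as edges) to regress a per-cell interface quantity from cell features such as the volume fraction. The feature choice, targets, and layer sizes are illustrative assumptions, not the authors' architecture.

    import torch
    import torch.nn as nn

    class MeshGNN(nn.Module):
        # edge_index: (2, n_edges) tensor of neighboring-cell pairs on the unstructured mesh
        def __init__(self, n_node_feats=4, hidden=64, n_layers=3, n_out=3):
            super().__init__()
            self.encoder = nn.Linear(n_node_feats, hidden)
            self.msg = nn.ModuleList(
                [nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU()) for _ in range(n_layers)])
            self.upd = nn.ModuleList(
                [nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU()) for _ in range(n_layers)])
            self.decoder = nn.Linear(hidden, n_out)  # e.g. an interface normal per cell

        def forward(self, x, edge_index):
            h = self.encoder(x)
            src, dst = edge_index
            for msg, upd in zip(self.msg, self.upd):
                m = msg(torch.cat([h[src], h[dst]], dim=-1))     # messages along shared faces
                agg = torch.zeros_like(h).index_add_(0, dst, m)  # sum incoming messages per cell
                h = upd(torch.cat([h, agg], dim=-1))             # update cell embeddings
            return self.decoder(h)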
This paper proposes a novel method for domain translation. Leveraging established parallels between generative models and dynamical systems, we propose a reformulation of the cycle-consistency construction. By embedding the model in a Hamiltonian structure, we obtain a continuous, expressive and, most importantly, invertible generative model for domain translation.
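For context, the invertibility follows from a standard property of Hamiltonian dynamics (generic notation, not the paper's specific parameterization): the flow generated by Hamilton's equations,

    \frac{dq}{dt} = \frac{\partial H}{\partial p}(q, p), \qquad
    \frac{dp}{dt} = -\frac{\partial H}{\partial q}(q, p),

is a bijection of phase space for any integration time, and its inverse is obtained by integrating the same equations backward in time. A generative map built as such a flow is therefore invertible by construction, whatever learned Hamiltonian H is used.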
This paper presents the OPUS ecosystem with a focus on the development of open machine translation models and tools, and their integration into end-user applications, development platforms and professional workflows. We discuss our on-going mission of increasing language coverage and translation quality, and also describe on-going work on the development of modular translation models and speed-optimized compact solutions for real-time translation on regular desktops and small devices.
Normalizing flows are a class of deep generative models that offer a promising route to sampling lattice field theories more efficiently than conventional Monte Carlo simulations. In this work, we show that the theoretical framework of stochastic normalizing flows, in which neural-network layers are combined with Monte Carlo updates, is the same that underlies out-of-equilibrium simulations based on Jarzynski's equality, which have recently been deployed to compute free-energy differences in lattice gauge theories. We lay out a strategy to optimize the efficiency of this extended class of generative models and present examples of applications.
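For reference, Jarzynski's equality relates the free-energy difference between two ensembles to an average over out-of-equilibrium trajectories (generic notation; conventions may differ from the paper's):

    \left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F},

where W is the work performed along a trajectory that drives the system from the initial to the final ensemble, the average is taken over trajectories initialized in equilibrium, and \Delta F is the free-energy difference between the two ensembles. In stochastic normalizing flows, the combination of deterministic network layers and stochastic Monte Carlo updates can be viewed as such a driving protocol.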
Many scientific fields require reliable predictions of the temporal behavior of complex systems. This strong interest is, however, hindered by a modeling issue: often, the governing equations describing the physics of the system under consideration are not accessible or, when they are known, their solution might require computational times incompatible with the prediction time constraints. Nowadays, approximating a complex system in a generic functional format and informing it from available observations has become common practice, as illustrated by the enormous scientific effort of recent years. Numerous successful examples based on deep neural networks are already available, although the generalizability of the models and their margins of guarantee are often overlooked. Here, we consider Long Short-Term Memory neural networks and thoroughly investigate the impact of the training set, and of its structure, on the quality of long-term prediction. Leveraging ergodic theory, we analyze the amount of data a priori necessary to guarantee a faithful model of the physical system. We show how an informed design of the training set, based on invariants of the system and on the structure of the underlying attractor, significantly improves the resulting models, opening research directions in the context of active learning. Furthermore, the nontrivial effects of memory initialization, when relying on memory-capable models, are illustrated. Our findings provide evidence-based good practices regarding the amount and the choice of data required for effective data-driven modeling of any complex dynamical system.
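To make the memory-initialization point concrete, the sketch below shows one common way to handle it: a washout (warm-up) segment of observed data is fed to the LSTM purely to build up its hidden and cell states before the free-running prediction starts. The segment length and model sizes are illustrative assumptions, not the paper's setup.

    import torch
    import torch.nn as nn

    class Forecaster(nn.Module):
        def __init__(self, n_vars=3, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(n_vars, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_vars)

        def forward(self, warmup, n_steps):
            # Washout: run the observed segment through the LSTM only to initialize
            # the hidden and cell states (the memory of the model).
            out, state = self.lstm(warmup)
            x = self.head(out[:, -1:, :])        # first predicted step
            preds = [x]
            # Free-running prediction: feed the model its own output.
            for _ in range(n_steps - 1):
                out, state = self.lstm(x, state)
                x = self.head(out)
                preds.append(x)
            return torch.cat(preds, dim=1)       # (batch, n_steps, n_vars)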
Between 2015 and 2019, the members of a Horizon 2020-funded Innovative Training Network named "AMVA4NewPhysics" studied the customization and application of advanced multivariate analysis methods and statistical learning tools to high-energy physics problems, and developed entirely new ones. Many of these methods were successfully used to improve the sensitivity of data analyses performed by the ATLAS and CMS experiments at the CERN Large Hadron Collider; several others, still in the testing phase, promise to further improve the precision of measurements of fundamental physics parameters and the reach of searches for new phenomena. In this paper, the most relevant new tools, among those studied and developed, are presented along with an evaluation of their performance.
Existing automated techniques for software documentation typically attempt to reason between two main sources of information: code and natural language. However, this reasoning process is often complicated by the lexical gap between more abstract natural language and more structured programming languages. One potential bridge for this gap is the Graphical User Interface (GUI), as GUIs inherently encode salient information about underlying program functionality into rich, pixel-based data representations. This paper offers one of the first comprehensive empirical investigations into the connection between GUIs and functional, natural language descriptions of software. First, we collect, analyze, and open source a large dataset of functional GUI descriptions consisting of 45,998 descriptions for 10,204 screenshots from popular Android applications. The descriptions were obtained from human labelers and underwent several quality control mechanisms. To gain insight into the representational potential of GUIs, we investigate the ability of four Neural Image Captioning models to predict natural language descriptions of varying granularity when provided a screenshot as input. We evaluate these models quantitatively, using common machine translation metrics, and qualitatively through a large-scale user study. Finally, we offer learned lessons and a discussion of the potential shown by multimodal models to enhance future techniques for automated software documentation.
Computational units in artificial neural networks follow a simplified model of biological neurons. In the biological model, the output signal of a neuron runs down the axon, splits following the many branches at its end, and passes identically to all the downstream neurons of the network. Each of the downstream neurons uses its copy of this signal as one of many dendritic inputs, integrates them all, and fires an output if the result is above some threshold. In the artificial neural network, this translates to the nonlinear filtering of the signal being performed in the upstream neuron, meaning that in practice the same activation is shared by all the downstream neurons that use that signal as their input. Dendrites thus play a passive role. We propose a slightly more complex model of the biological neuron, in which dendrites play an active role: the activation at the output of the upstream neuron becomes optional, and instead the signals going through each dendrite undergo independent nonlinear filtering before the linear combination. We implement this new model as a ReLU computational unit and discuss its biological plausibility. We compare this new computational unit with the standard one and describe it from a geometrical point of view. We provide a Keras implementation of this unit for fully connected and convolutional layers and estimate the resulting changes in FLOPs and number of weights. We then use these layers in ResNet architectures on CIFAR-10, CIFAR-100, Imagenette, and Imagewoof, obtaining performance improvements of up to 1.73% over standard ResNets. Finally, we prove a universal representation theorem for continuous functions on compact sets and show that this new unit has more representational power than its standard counterpart.
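A minimal sketch of one possible reading of such a unit is given below: each input connection gets its own bias and ReLU, applied before the weighted sum, instead of a single shared activation applied after it. The exact parameterization is an assumption for illustration and is not taken from the authors' Keras implementation.

    import tensorflow as tf

    class DendriticDense(tf.keras.layers.Layer):
        # Dense-layer variant in which each dendrite (input connection) applies its own
        # nonlinear filtering before the linear combination, instead of one shared
        # activation applied to the summed pre-activation.
        def __init__(self, units, **kwargs):
            super().__init__(**kwargs)
            self.units = units

        def build(self, input_shape):
            d = int(input_shape[-1])
            self.w = self.add_weight(name="w", shape=(d, self.units),
                                     initializer="glorot_uniform", trainable=True)
            self.b = self.add_weight(name="b", shape=(d, self.units),
                                     initializer="zeros", trainable=True)

        def call(self, x):
            # (batch, d) -> (batch, d, units): per-connection bias and ReLU, then sum over inputs.
            z = tf.nn.relu(x[:, :, None] + self.b)
            return tf.reduce_sum(z * self.w, axis=1)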
Fruit is a key crop in worldwide agriculture, feeding millions of people. The standard supply chain of fruit products involves quality checks to guarantee freshness, taste, and, most of all, safety. An important factor that determines fruit quality is its stage of ripening. This is usually classified manually by experts in the field, which makes it a labor-intensive and error-prone process. Thus, there is a growing need for automation in the process of fruit ripeness classification. Many automatic methods have been proposed that employ a variety of feature descriptors for the food item to be graded. Machine learning and deep learning techniques dominate the top-performing methods. Furthermore, deep learning can operate on raw data and thus relieves users of having to compute complex engineered features, which are often crop-specific. In this survey, we review the latest methods proposed in the literature to automate fruit ripeness classification, highlighting the most common feature descriptors they operate on.